Robotic Grasping
A versatile robotic hand with 3D perception, force sensing for autonomous manipulation
Correll, Nikolaus, Kriegman, Dylan, Otto, Stephen, Watson, James
We describe a force-controlled robotic gripper with built-in tactile and 3D perception. We also describe a complete autonomous manipulation pipeline consisting of object detection, segmentation, point cloud processing, force-controlled manipulation, and symbolic (re)planning. The design emphasizes versatility in terms of applications, manufacturability, use of commercial off-the-shelf parts, and open-source software. We validate the design by characterizing force control (up to 32 N, controllable in steps of 0.08 N) and force measurement, and through two manipulation demonstrations: assembly of the Siemens gear assembly problem, and a sensor-based stacking task requiring replanning. These demonstrate robust execution of long sequences of sensor-based manipulation tasks, making the resulting platform a solid foundation for research in task-and-motion planning, for education, and for quick prototyping of household, industrial, and warehouse automation tasks.
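As a back-of-the-envelope illustration of the reported force-control resolution (up to 32 N in 0.08 N steps, i.e. 400 discrete levels), here is a minimal sketch of how a commanded grip force might be clamped and quantized to the nearest achievable setpoint. The names and structure are hypothetical, not taken from the paper's open-source software.

```python
# Minimal sketch: quantize a commanded grip force to the resolution
# reported in the abstract (up to 32 N, in steps of 0.08 N).
# All names here are illustrative, not from the paper's software stack.

F_MAX_N = 32.0   # maximum controllable force reported in the abstract
F_STEP_N = 0.08  # smallest controllable force increment

def quantize_force(command_n: float) -> float:
    """Clamp a force command to [0, F_MAX_N] and round it to the
    nearest achievable 0.08 N step (400 discrete levels)."""
    clamped = min(max(command_n, 0.0), F_MAX_N)
    steps = round(clamped / F_STEP_N)
    return steps * F_STEP_N

if __name__ == "__main__":
    for f in (0.1, 5.03, 40.0):
        print(f"{f:6.2f} N -> {quantize_force(f):5.2f} N")
```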
Learning Any-View 6DoF Robotic Grasping in Cluttered Scenes via Neural Surface Rendering
Jauhri, Snehal, Lunawat, Ishikaa, Chalvatzaki, Georgia
Robotic manipulation is crucial in various applications, such as industrial automation and assistive robotics. A key component of manipulation is effective 6DoF grasping in cluttered environments, as this ability enhances the efficiency, versatility, and autonomy of robotic systems operating in unstructured environments. Grasping effectively with limited sensory input reduces the need for extensive exploration and multiple viewpoints, enabling efficient and time-saving solutions to robotic applications. Robotic grasping involves generating suitable poses for the robot's end-effector given some sensory information (e.g., visual data). While planar bin picking, i.e., top-down 4DoF grasping (3D position and roll orientation) with two-fingered or suction grippers, has largely been solved thanks to deep learning models [1-4], 6DoF grasping in the wild, i.e., grasping in the SE(3) space of 3D positions and 3D rotations from any viewpoint, remains a challenge [5, 6]. Embodied AI agents, e.g., mobile manipulation robots [7, 8], are expected to perform manipulation tasks much as humans do; humans can leverage geometric information from limited views, together with mental models, to grasp objects without exploring to reconstruct the scene. Such grasping in open, cluttered spaces requires that robots, given some spatial sensory information, e.g., 3D point cloud data, can reconstruct the scene, understand the graspable areas of different objects, and finally select grasps that are highly likely to succeed, both in lifting an object for a subsequent manipulation task and, crucially, in avoiding collisions that could damage the surrounding environment.
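To make the SE(3) setting concrete, a 6DoF grasp candidate is commonly represented as a 4x4 homogeneous transform (3D rotation plus 3D position) and selected by combining a quality score with a collision check. The sketch below is generic scaffolding for that selection step, not the paper's neural-surface-rendering method; `predict_quality` and `in_collision` are hypothetical stand-ins for learned and geometric components.

```python
# Generic sketch of 6DoF grasp selection in SE(3): each candidate is a
# 4x4 homogeneous transform for the gripper. NOT the paper's method;
# quality prediction and collision checking are stubbed out.
import numpy as np

def make_grasp(rotation: np.ndarray, position: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 gripper pose from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def predict_quality(grasp: np.ndarray, cloud: np.ndarray) -> float:
    """Stand-in for a learned grasp-quality predictor: here, simply
    inverse distance from the grasp center to the nearest point."""
    d = np.linalg.norm(cloud - grasp[:3, 3], axis=1).min()
    return 1.0 / (1.0 + d)

def in_collision(grasp: np.ndarray, cloud: np.ndarray,
                 clearance: float = 0.01) -> bool:
    """Stand-in geometric check: reject grasps whose center comes
    closer than `clearance` to any observed point."""
    return np.linalg.norm(cloud - grasp[:3, 3], axis=1).min() < clearance

def select_grasp(candidates, cloud):
    """Keep collision-free candidates; return the highest scorer."""
    feasible = [g for g in candidates if not in_collision(g, cloud)]
    return max(feasible, key=lambda g: predict_quality(g, cloud),
               default=None)
```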
ShapeShift: Superquadric-based Object Pose Estimation for Robotic Grasping
Zeng, E. Zhixuan, Chen, Yuhao, Wong, Alexander
Object pose estimation is a critical task in robotics for precise object manipulation. However, current techniques heavily rely on a reference 3D object, limiting their generalizability and making it expensive to expand to new object categories. Direct pose predictions also provide limited information for robotic grasping without referencing the 3D model. Keypoint-based methods offer intrinsic descriptiveness without relying on an exact 3D model, but they may lack consistency and accuracy. To address these challenges, this paper proposes ShapeShift, a superquadric-based framework for object pose estimation that predicts the object's pose relative to a primitive shape fitted to the object. The proposed framework offers intrinsic descriptiveness and the ability to generalize to arbitrary geometric shapes beyond the training set.
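For background on the primitive itself: a superquadric is defined by the textbook inside-outside function below (scales a1, a2, a3 and shape exponents ε1, ε2), and fitting one to observed points typically means driving this function toward 1 on the object surface. This is the standard formulation from the superquadric literature, not code from ShapeShift.

```python
# Textbook superquadric inside-outside function (Barr, 1981), the
# primitive that superquadric-based pose estimators fit to objects.
# F < 1 inside, F = 1 on the surface, F > 1 outside. Not ShapeShift code.
import numpy as np

def superquadric_F(points: np.ndarray, scale, eps) -> np.ndarray:
    """Evaluate F for Nx3 points expressed in the superquadric's frame.

    scale = (a1, a2, a3): half-extents along x, y, z.
    eps   = (e1, e2):     shape exponents ((1, 1) gives an ellipsoid;
                          values near 0.1 give a box-like shape).
    """
    a1, a2, a3 = scale
    e1, e2 = eps
    x, y, z = np.abs(points).T
    xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
    return xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

# A crude fit would search (scale, eps, pose) to drive F toward 1 on
# the observed points, e.g. by minimizing sum((F**e1 - 1)**2).
```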
Robotic Grasping of Novel Objects
We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. Our algorithm is trained via supervised learning, using synthetic images for the training set. We demonstrate on a robotic manipulation platform that this approach successfully grasps a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set.
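Because the method predicts a 2D grasp point per image rather than building a 3D model, recovering a 3D grasp location from two calibrated views reduces to standard triangulation. Below is a generic direct-linear-transform (DLT) sketch of that final step, using assumed 3x4 projection matrices; it is the textbook computer-vision routine, not the authors' implementation.

```python
# Generic linear (DLT) triangulation: given the predicted 2D grasp
# point in two calibrated views, recover its 3D location. Standard
# computer-vision step, not the authors' code.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                uv1: np.ndarray, uv2: np.ndarray) -> np.ndarray:
    """P1, P2: 3x4 camera projection matrices; uv1, uv2: pixel
    coordinates of the same predicted grasp point. Returns a 3D point."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize
```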
ep.352: Robotics Grasping and Manipulation Competition Spotlight, with Yu Sun
Yu Sun, Professor of Computer Science and Engineering at the University of South Florida, created and organized the Robotic Grasping and Manipulation Competition. Yu talks about the impact robots will have in domestic environments, the disparity between industry and academia showcased by competitions, and the commercialization of research. Yu Sun is a Professor in the Department of Computer Science and Engineering at the University of South Florida (Assistant Professor 2009-2015, Associate Professor 2015-2020, Associate Chair of Graduate Affairs 2018-2020). He was a Visiting Associate Professor at Stanford University from 2016 to 2017, and received his Ph.D. in Computer Science from the University of Utah in 2007. He then completed postdoctoral training at Mitsubishi Electric Research Laboratories (MERL) in Cambridge, MA (2007-2008) and at the University of Utah (2008-2009).
Robot Grasping and Manipulation
Britannica defines a robot as any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner. By extension, robotics is the engineering discipline dealing with robots' design, construction, and operation. Robots have increasingly been used in environments that require object grasping and manipulation. They are helpful in households, where there may be a need to pick and place objects such as books, balls, and toys, and on manufacturing production lines, where they pick and move products such as packaged goods and mechanical parts. Research in robotic grasping and manipulation is believed to date back to the 1970s, around the time the science-fiction classic Westworld put robots on the big screen.
Research Challenges and Progress in Robotic Grasping and Manipulation Competitions
Sun, Yu, Falco, Joe, Roa, Maximo A., Calli, Berk
This paper discusses recent research progress in robotic grasping and manipulation in light of the latest Robotic Grasping and Manipulation Competitions (RGMCs). We first provide an overview of past benchmarks and competitions related to the robotic manipulation field. Then, we discuss the methodology behind designing the manipulation tasks in RGMCs. We provide a detailed analysis of key challenges for each task and identify the most difficult aspects based on the competing teams' performance in recent years. We believe that such an analysis is insightful for determining future research directions in the robotic manipulation domain.
Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection
Levine, Sergey, Pastor, Peter, Krizhevsky, Alex, Quillen, Deirdre
We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
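The servoing described here amounts to repeatedly sampling candidate gripper motions, scoring each with the learned success predictor, and refitting the sampling distribution toward the best scorers (the paper uses a cross-entropy-method-style optimization over motion commands). Below is a minimal sketch of that inner loop under those assumptions, with `predict_success` as a dummy stand-in for the trained CNN.

```python
# Sketch of CEM-style grasp servoing as described in the abstract:
# sample motion commands, score them with the learned success
# predictor, and refit the sampler to the elite set.
# `predict_success` is a placeholder for the trained CNN.
import numpy as np

def predict_success(image, motion: np.ndarray) -> float:
    """Placeholder for the CNN mapping (image, motion) -> P(success)."""
    return float(np.exp(-np.linalg.norm(motion)))  # dummy score

def choose_motion(image, dim=3, iters=3, samples=64, elite=6,
                  rng=np.random.default_rng(0)) -> np.ndarray:
    """Cross-entropy-method search over task-space motion commands."""
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        cand = rng.normal(mean, std, size=(samples, dim))
        scores = np.array([predict_success(image, m) for m in cand])
        best = cand[np.argsort(scores)[-elite:]]   # keep the elite set
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mean  # motion command to execute before re-sensing
```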
Why Tactile Intelligence Is the Future of Robotic Grasping
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. The simple task of picking something up is not as easy as it seems. Roboticists aim to develop a robot that can pick up anything, but today most robots perform "blind grasping," where they are dedicated to picking up an object from the same location every time. If anything changes, such as the shape, texture, or location of the object, the robot won't know how to respond, and the grasp attempt will most likely fail.
How Google Wants to Solve Robotic Grasping by Letting Robots Learn for Themselves
You are likely pretty good at picking things up. Part of the reason you're pretty good at picking things up is that when you were little, you spent a lot of time trying and failing to pick things up, and learning from your experiences. For roboticists who don't want to wait through the equivalent of an entire robotic childhood, there are ways to streamline the process: at Google Research, they've set up more than a dozen robotic arms and let them work for months on picking up objects that are heavy, light, flat, large, small, rigid, soft, and translucent (although not all at once). We talk to the researchers about how their approach is unique, and why 800,000 grasps (!) is just the beginning. Part of what makes animals so good at grasping things is our eyes, as opposed to just our hands.